Soft actuators have attracted a great deal of interest in the context of rehabilitative and assistive robots for increasing safety and lowering costs as compared to rigid-body robotic systems. During actuation, soft actuators experience high levels of deformation, which can lead to microscale fractures in their elastomeric structure that fatigue the material over time and eventually lead to macroscale damage and failure. This paper reports finite element modeling (FEM) of pneu-net actuators at high bending angles, along with repeated experiments at high deformation rates, to study how fatigue arises in soft robotic actuators and causes deviation from their ideal behavior. Comparing the FEM model against experimental data, we show that FEM can model the performance of the actuator before fatigue up to a bending angle of 167 degrees with approximately 96% accuracy. We also show that the FEM model's accuracy drops to 80% after fatigue induced by repeated high-angle bending. The results objectively highlight the emergence of fatigue over cyclic activation of the system and the resulting deviation from the computational FEM model. Such behavior can be accounted for in future controllers that adapt to the time-varying, non-autonomous response dynamics of soft robots.
Recent advances in upper limb prostheses have led to significant improvements in the number of movements provided by the robotic limb. However, the method for controlling multiple degrees of freedom via user-generated signals remains challenging. To address this issue, various machine learning controllers have been developed to better predict movement intent. As these controllers become more intelligent and take on more autonomy in the system, the traditional approach of representing the human-machine interface as a human controlling a tool becomes limiting. One possible approach to improve the understanding of these interfaces is to model them as collaborative, multi-agent systems through the lens of joint action. The field of joint action has commonly been applied to two human partners working together to achieve a task, such as singing or moving a table together, by effecting coordinated change in their shared environment. In this work, we compare different prosthesis controllers (proportional electromyography with sequential switching, pattern recognition, and adaptive switching) in terms of how they present the hallmarks of joint action. The results of the comparison lead to a new perspective for understanding how existing myoelectric systems relate to each other, along with recommendations for how to improve these systems by increasing the collaborative communication between the partners.
Developing robust and fair AI systems requires datasets with comprehensive sets of labels that can help ensure the validity and legitimacy of relevant measurements. Recent efforts therefore focus on collecting person-related datasets that have carefully selected labels, including sensitive characteristics, with consent forms in place to use those attributes for model testing and development. Responsible data collection involves several stages, including but not limited to determining use-case scenarios, selecting categories (annotations) such that the data are fit for the purpose of measuring algorithmic bias for subgroups, and, most importantly, ensuring that the selected categories/subcategories are robust to regional diversity and inclusive of as many subgroups as possible. Meta, in a continuation of our efforts to measure AI algorithmic bias and robustness (https://ai.facebook.com/blog/shedding-light-on-fairness-in-ai-with-a-new-data-set), is working on collecting a large consent-driven dataset with a comprehensive list of categories. This paper describes our proposed design of such categories and subcategories for Casual Conversations v2.
This paper presents a novel framework for real-time localization and egomotion tracking of a vehicle in a reference map. The core idea is to map the semantic objects observed by the vehicle and register them to their corresponding objects in the reference map. While several recent works have exploited semantic information for cross-view localization, the main contribution of this work is a view-invariant formulation that makes the method directly applicable to any viewpoint configuration in which objects are detectable. Another distinguishing feature is robustness to changes in the environment and its objects, owing to a data-association scheme suited to extreme-outlier regimes (e.g., 90% association outliers). To demonstrate our framework, we consider the example of localizing a ground vehicle in a reference object map using only cars as objects. While only a stereo camera is used on the ground vehicle, we consider prior maps constructed from ground viewpoints using stereo cameras and lidar scans, as well as georeferenced aerial images captured on a different date, to demonstrate the framework's robustness to different modalities, viewpoints, and environmental changes. Evaluations on the KITTI dataset show that, over a 3.7 km trajectory, localization in the lidar reference map occurs within 36 seconds and is followed by a mean position error of 8.5 m; in the aerial object map, where 77% of the objects are outliers, localization is achieved within 71 seconds with a mean position error of 7.9 m.
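The abstract's data-association scheme itself is not specified here; as a rough illustration of how registration can survive extreme outlier ratios, the sketch below uses a hypothesis-and-test (RANSAC-style) search for a 2-D translation between two object sets. All names, the toy translation-only model, and the synthetic data are assumptions for illustration, not the paper's method:

```python
import numpy as np

def ransac_translation(src, dst, iters=500, tol=0.1, seed=0):
    """Estimate a 2-D translation aligning src to dst despite heavy outliers."""
    rng = np.random.default_rng(seed)
    best_t, best_inliers = None, -1
    for _ in range(iters):
        # hypothesize a translation from one random src-dst pairing
        t = dst[rng.integers(len(dst))] - src[rng.integers(len(src))]
        # count src points that land near *some* dst point after shifting
        d = np.linalg.norm((src + t)[:, None] - dst[None], axis=2).min(axis=1)
        n_in = int((d < tol).sum())
        if n_in > best_inliers:
            best_t, best_inliers = t, n_in
    return best_t

rng = np.random.default_rng(1)
cars_map = rng.uniform(0, 100, size=(10, 2))   # objects in the reference map
t_true = np.array([2.0, 3.0])
observed = cars_map + t_true                   # the vehicle's view of the same objects
clutter = rng.uniform(0, 100, size=(30, 2))    # 75% of candidates are outliers
t_est = ransac_translation(cars_map, np.vstack([observed, clutter]))
```

Because each hypothesis needs only a single pairing, a correct correspondence is sampled with high probability even when most candidate pairings are wrong, which is the essence of outlier-robust association.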
Traditional black-box optimization methods are inefficient when dealing with multipoint measurements, i.e., when each query in the control domain requires a set of measurements in a secondary domain to compute the objective. In particle accelerators, emittance tuning via quadrupole scans is an example of optimization with multipoint measurements. Although emittance is a critical parameter for the performance of high-brightness machines, including X-ray lasers and linear colliders, its full optimization is often limited by the time required for tuning. Here, we extend the recently proposed Bayesian Algorithm Execution (BAX) to optimization tasks with multipoint measurements. BAX achieves sample efficiency by selecting and modeling individual points in the joint control-measurement domain. We apply BAX to emittance tuning at the Linac Coherent Light Source (LCLS) and the Facility for Advanced Accelerator Experimental Tests II (FACET-II). In an LCLS simulation environment, we show that BAX delivers a 20x gain in efficiency while also being more robust to noise compared with traditional optimization methods. Additionally, we ran BAX live at LCLS and FACET-II, matching the hand-tuned emittance at FACET-II and achieving an optimal emittance 24% lower than that obtained by hand-tuning at LCLS. We anticipate that our method is readily adaptable to other types of optimization problems involving the multipoint measurements commonly found in scientific instruments.
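To make the "multipoint" structure concrete: a single emittance evaluation requires a whole scan of secondary measurements (beam sizes at several quadrupole strengths), which is then reduced to one objective value. The toy quadratic model and figure of merit below are illustrative assumptions, not the paper's physics:

```python
import numpy as np

def beam_size_sq(k, eps=2.0, k0=1.0, s0=0.5):
    # toy model: squared beam size is quadratic in quadrupole strength k
    return eps * ((k - k0) ** 2 + s0)

def emittance_figure(ks, sig2):
    """Fit sigma^2(k) = a k^2 + b k + c over a scan and return a toy
    emittance-like figure of merit from the curvature and parabola minimum."""
    a, b, c = np.polyfit(ks, sig2, 2)
    return np.sqrt(a * (c - b ** 2 / (4 * a)))

ks = np.linspace(-2.0, 4.0, 7)              # one control query -> a whole scan
fig = emittance_figure(ks, beam_size_sq(ks))
```

A conventional optimizer would pay for all seven measurements per objective evaluation; modeling the individual scan points directly, as BAX does, is what recovers sample efficiency.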
Machine learning (ML) methods can effectively analyze data, identify patterns in them, and make high-quality predictions. Good predictions, however, usually come with an inability to present the detected patterns in a human-readable way. Recent technical developments have led to explainable artificial intelligence (XAI) techniques that aim to open such black boxes and enable humans to gain new insights from the detected patterns. We investigate XAI in a domain where specific insights can significantly affect consumer behavior, namely electricity use. Knowing that specific feedback on individuals' electricity consumption triggers resource conservation, we created five visualizations with XAI methods from electricity consumption time series, taking existing domain-specific design knowledge into account. Our experimental evaluation with 152 participants shows that humans can assimilate the patterns displayed by XAI visualizations, but that such visualizations should follow known visualization patterns in order to be well understood by users.
We introduce a novel deep learning method for detecting individual trees in urban environments using high-resolution multispectral aerial imagery. We use a convolutional neural network to regress a confidence map indicating the locations of individual trees, which are localized using a peak-finding algorithm. Our method provides complete spatial coverage by detecting trees in both public and private spaces, and can scale to very large areas. In our study area spanning five cities in Southern California, we achieve an F-score of 0.735 and an RMSE of 2.157 m. We use our method to produce a map of trees across the urban forest of California, demonstrating the potential to support future urban forestry studies at unprecedented scales.
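The peak-finding step mentioned above can be sketched as local-maximum extraction on the regressed confidence map; the threshold, window size, and toy map below are assumptions, not the paper's settings:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def find_tree_peaks(conf, threshold=0.5, size=3):
    """Return (row, col) indices of local maxima above a confidence threshold."""
    is_peak = (maximum_filter(conf, size=size) == conf) & (conf > threshold)
    return [tuple(p) for p in np.argwhere(is_peak)]

conf = np.zeros((10, 10))
conf[2, 3], conf[7, 6] = 0.9, 0.8   # two simulated tree responses
peaks = find_tree_peaks(conf)
```

Each surviving peak becomes one detected tree location; the window size trades off merging nearby crowns against splitting one crown into several detections.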
With the increasing application of deep learning algorithms to time series classification, especially in high-stakes scenarios, interpreting those algorithms becomes key. Although research on time series interpretability has grown, accessibility for practitioners remains an obstacle: without a unified API or framework, the interpretability methods in use and their visualizations are highly diverse. To close this gap, we introduce TSInterpret, an easily extensible open-source Python library for interpreting the predictions of time series classifiers that combines existing interpretation approaches into one unified framework. The library (i) provides state-of-the-art interpretability algorithms, (ii) exposes a unified API enabling users to work with explanations consistently, and (iii) provides a suitable visualization for each explanation.
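To make the idea of a unified explainer API concrete, the sketch below shows what a common interface for time series explainers might look like. This is a hypothetical illustration, not TSInterpret's actual API; the class and method names are invented:

```python
from abc import ABC, abstractmethod
import numpy as np

class TSExplainer(ABC):
    """Hypothetical common interface for time series explainers."""
    @abstractmethod
    def explain(self, x: np.ndarray) -> np.ndarray:
        """Return per-timestep relevance scores for one series."""

class SensitivityExplainer(TSExplainer):
    """Toy explainer: relevance = |weight * input| per timestep."""
    def __init__(self, weights: np.ndarray):
        self.weights = weights
    def explain(self, x: np.ndarray) -> np.ndarray:
        return np.abs(self.weights * x)

explainer = SensitivityExplainer(np.array([0.1, -2.0, 0.5]))
relevance = explainer.explain(np.ones(3))
```

The benefit of such an interface is that downstream code (visualization, evaluation) can consume any explainer interchangeably, which is the accessibility problem the library targets.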
In general, large datasets enable deep learning models to perform with good accuracy and generalizability. However, massive high-fidelity simulation datasets (from molecular chemistry, astrophysics, computational fluid dynamics (CFD), etc.) can be challenging to curate due to dimensionality and storage constraints. Lossy compression algorithms can help mitigate storage limitations as long as the overall data fidelity is preserved. To illustrate this point, we demonstrate that deep learning models trained and tested on data from a petascale CFD simulation are robust, in a semantic segmentation problem, to the errors introduced during lossy compression. Our results demonstrate that lossy compression algorithms offer a realistic pathway for exposing high-fidelity scientific data through open-source data repositories to build community datasets. In this paper, we outline, construct, and evaluate the requirements for establishing a big-data framework, demonstrated at https://blastnet.github.io/, for scientific machine learning.
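A minimal way to probe this kind of robustness is to round-trip a field through a lossy stage and check how much a downstream labeling changes. The sketch below uses uniform quantization as a crude stand-in for a real lossy compressor, and a simple sign-threshold rule as a stand-in for segmentation; both are assumptions for illustration:

```python
import numpy as np

def lossy_roundtrip(x, bits=8):
    """Uniform quantization as a crude stand-in for a lossy compressor."""
    lo, hi = x.min(), x.max()
    q = np.round((x - lo) / (hi - lo) * (2 ** bits - 1))
    return q / (2 ** bits - 1) * (hi - lo) + lo

rng = np.random.default_rng(0)
field = rng.normal(size=(64, 64))     # toy simulation field
recon = lossy_roundtrip(field)
labels = field > 0.0                  # toy "segmentation" on the original field
agreement = float((labels == (recon > 0.0)).mean())
```

Only values within half a quantization step of the decision boundary can flip label, so label agreement stays high even at aggressive bit budgets; the paper's point is that trained segmentation models exhibit an analogous tolerance.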
Deep convolutional neural networks (DCNNs) have achieved human-level accuracy in face identification (Phillips et al., 2018), though it remains unclear how accurately they discriminate highly similar faces. Here, humans and a DCNN performed a challenging face-identity matching task that included identical twins. Participants (N = 87) viewed pairs of face images of three types: same-identity pairs, general imposter pairs (different identities from similar demographic groups), and twin imposter pairs (identical twin siblings). The task was to determine whether a pair showed the same person or different people. Identity comparisons were tested in three viewpoint-disparity conditions: frontal to frontal, frontal to 45-degree profile, and frontal to 90-degree profile. Accuracy for discriminating matched-identity pairs from twin imposter and general imposter pairs was assessed in each viewpoint-disparity condition. Humans were more accurate for general imposter pairs than twin imposter pairs, and accuracy declined as the viewpoint disparity between the images in a pair increased. A DCNN trained for face identification (Ranjan et al., 2018) was tested on the same image pairs presented to humans. Machine performance mirrored the pattern of human accuracy, but was at or above all human performance in all but one condition. Human and machine similarity scores were compared across all image-pair types. This item-level analysis showed that human and machine similarity ratings correlated significantly in six of the nine image-pair types [range r = 0.38 to r = 0.63], suggesting general agreement between humans and the DCNN on the perception of face similarity. These findings also advance our understanding of DCNN performance for discriminating high-resemblance faces, demonstrate that DCNNs perform at or above the level of humans, and suggest a degree of parity between the features used by humans and DCNNs.